Feature/smp granular locks v4 #1154
base: main
Conversation
@chinglee-iot @aggarg This PR introduces the granular locks changes to the FreeRTOS kernel. Please have a look and we can discuss further changes in this PR's context. Thank you. cc: @ESP-Marius @Dazza0
Thank you for your contribution, I'll forward this request to the team. There are a few errors in the PR; could you please try fixing them?
Hello @sudeep-mohanty, I'm just following up to see if you've had time to fix the build issues.
@rawalexe Yes! I shall work on the failures and will also do some refactoring for an easier review process. For now, I've put this PR in draft.
Force-pushed from 0159a4a to 5bf8c33
Hi @sudeep-mohanty,
Force-pushed from 5bf8c33 to 9f8acc7
Hi @ActoryOu, I've made some updates which should fix the CI failures; however, I could not understand why the link-verifier action fails. It seems more like a script failure to me, so this action will still fail. If you have more information on what is causing it, could you let me know and I shall fix it. Thanks.
Force-pushed from c79962d to f50d476
Created a PR to fix the CI failure - FreeRTOS/FreeRTOS#1292 |
Force-pushed from 70eb466 to edc1c98
#if ( portUSING_GRANULAR_LOCKS == 1 )
    #define taskDATA_GROUP_ENTER_CRITICAL_FROM_ISR( pxISRSpinlock, puxSavedInterruptStatus ) \
    do { \
        *( puxSavedInterruptStatus ) = portSET_INTERRUPT_MASK_FROM_ISR(); \
        const BaseType_t xCoreID = ( BaseType_t ) portGET_CORE_ID(); \
        /* Take the ISR spinlock */ \
        portGET_SPINLOCK( xCoreID, ( portSPINLOCK_TYPE * ) pxISRSpinlock ); \
        /* Increment the critical nesting count */ \
        portINCREMENT_CRITICAL_NESTING_COUNT( xCoreID ); \
    } while( 0 )
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
Suggested change:

#if ( portUSING_GRANULAR_LOCKS == 1 )
    #define taskDATA_GROUP_ENTER_CRITICAL_FROM_ISR( pxISRSpinlock, puxSavedInterruptStatus ) \
    do { \
        *( puxSavedInterruptStatus ) = portSET_INTERRUPT_MASK_FROM_ISR(); \
        { \
            const BaseType_t xCoreID = ( BaseType_t ) portGET_CORE_ID(); \
            /* Take the ISR spinlock */ \
            portGET_SPINLOCK( xCoreID, ( portSPINLOCK_TYPE * ) ( pxISRSpinlock ) ); \
            /* Increment the critical nesting count */ \
            portINCREMENT_CRITICAL_NESTING_COUNT( xCoreID ); \
        } \
    } while( 0 )
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
#if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )
    UBaseType_t uxSavedInterruptStatus = kernelENTER_CRITICAL_FROM_ISR();
#endif /* #if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) ) */
Suggested change:

#if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )
    UBaseType_t uxSavedInterruptStatus;
#endif /* #if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) ) */

    traceENTER_xTaskIncrementTick();

#if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )
    uxSavedInterruptStatus = kernelENTER_CRITICAL_FROM_ISR();
#endif /* #if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) ) */

This suggestion addresses the ISO C90 requirement that all variable declarations must appear at the beginning of a block.
 * called from a critical section within an ISR. */
#else /* #if ( ! ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) ) ) */
    /* Lock the kernel data group as we are about to access its members */
    UBaseType_t uxSavedInterruptStatus;
ISO C90 requires that all variable declarations appear at the start of a block. Suggest we move the declaration before the trace macros.
#if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )
    PRIVILEGED_DATA static portSPINLOCK_TYPE xTaskSpinlock = portINIT_SPINLOCK_STATIC;
    PRIVILEGED_DATA static portSPINLOCK_TYPE xISRSpinlock = portINIT_SPINLOCK_STATIC;
#endif /* #if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) ) */
Suggested change:

#if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )
    PRIVILEGED_DATA static portSPINLOCK_TYPE xTimerTaskSpinlock = portINIT_SPINLOCK_STATIC;
    PRIVILEGED_DATA static portSPINLOCK_TYPE xTimerISRSpinlock = portINIT_SPINLOCK_STATIC;
#endif /* #if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) ) */

Suggest using different names so that the spinlock names are distinct within the kernel.
xTaskPriorityInherit() is called inside a critical section from queue.c. This commit moves the critical section into xTaskPriorityInherit(). Co-authored-by: Sudeep Mohanty <[email protected]>
Changed xPreemptionDisable to be a count rather than a pdTRUE/pdFALSE flag. This allows nested calls to vTaskPreemptionEnable(), where a yield only occurs when xPreemptionDisable is 0. Co-authored-by: Sudeep Mohanty <[email protected]>
Adds the required checks for granular locking port macros. Port Config: - portUSING_GRANULAR_LOCKS to enable granular locks - portCRITICAL_NESTING_IN_TCB should be disabled Granular Locking Port Macros: - Spinlocks - portSPINLOCK_TYPE - portINIT_SPINLOCK( pxSpinlock ) - portINIT_SPINLOCK_STATIC - Locking - portGET_SPINLOCK() - portRELEASE_SPINLOCK() Co-authored-by: Sudeep Mohanty <[email protected]>
- Updated prvCheckForRunStateChange() for granular locks - Updated vTaskSuspendAll() and xTaskResumeAll() - Now holds the xTaskSpinlock during kernel suspension - Increments/decrements xPreemptionDisable. Only yields when 0, thus allowing for nested suspensions across different data groups Co-authored-by: Sudeep Mohanty <[email protected]>
Updated critical section macros with granular locks. Some tasks.c APIs relied on their callers to enter critical sections. This assumption no longer works under granular locking. Critical sections added to the following functions: - `vTaskInternalSetTimeOutState()` - `xTaskIncrementTick()` - `vTaskSwitchContext()` - `xTaskRemoveFromEventList()` - `eTaskConfirmSleepModeStatus()` - `xTaskPriorityDisinherit()` - `pvTaskIncrementMutexHeldCount()` Added missing suspensions to the following functions: - `vTaskPlaceOnEventList()` - `vTaskPlaceOnUnorderedEventList()` - `vTaskPlaceOnEventListRestricted()` Fixed the locking in vTaskSwitchContext(). vTaskSwitchContext() must acquire both kernel locks, viz., the task lock and ISR lock, because vTaskSwitchContext() can be called from either task context or ISR context. Also, vTaskSwitchContext() must not alter the interrupt state prematurely. Co-authored-by: Sudeep Mohanty <[email protected]>
Updated queue.c to use granular locking - Added xTaskSpinlock and xISRSpinlock - Replaced critical section macros with data group critical section macros such as taskENTER/EXIT_CRITICAL/_FROM_ISR() with queueENTER/EXIT_CRITICAL_FROM_ISR(). - Added vQueueEnterCritical/FromISR() and vQueueExitCritical/FromISR() which map to the data group critical section macros. - Added prvLockQueueForTasks() and prvUnlockQueueForTasks() as the granular locking equivalents to prvLockQueue() and prvUnlockQueue() respectively Co-authored-by: Sudeep Mohanty <[email protected]>
Updated event_groups.c to use granular locking - Added xTaskSpinlock and xISRSpinlock - Replaced critical section macros with data group critical section macros such as taskENTER/EXIT_CRITICAL/_FROM_ISR() with event_groupsENTER/EXIT_CRITICAL/_FROM_ISR(). - Added vEventGroupsEnterCritical/FromISR() and vEventGroupsExitCritical/FromISR() functions that map to the data group critical section macros. - Added prvLockEventGroupForTasks() and prvUnlockEventGroupForTasks() to suspend the event group when executing non-deterministic code. - xEventGroupSetBits() and vEventGroupDelete() access the kernel data group directly. Thus, added vTaskSuspendAll()/xTaskResumeAll() to these functions. Co-authored-by: Sudeep Mohanty <[email protected]>
Updated stream_buffer.c to use granular locking - Added xTaskSpinlock and xISRSpinlock - Replaced critical section macros with data group critical section macros such as taskENTER/EXIT_CRITICAL/_FROM_ISR() with sbENTER/EXIT_CRITICAL_FROM_ISR(). - Added vStreambuffersEnterCritical/FromISR() and vStreambuffersExitCritical/FromISR() to map to the data group critical section macros. - Added prvLockStreamBufferForTasks() and prvUnlockStreamBufferForTasks() to suspend the stream buffer when executing non-deterministic code. Co-authored-by: Sudeep Mohanty <[email protected]>
Updated timers.c to use granular locking - Added xTaskSpinlock and xISRSpinlock - Replaced critical section macros with data group critical section macros such as taskENTER/EXIT_CRITICAL() with tmrENTER/EXIT_CRITICAL(). - Added vTimerEnterCritical() and vTimerExitCritical() to map to the data group critical section macros. Co-authored-by: Sudeep Mohanty <[email protected]>
Co-authored-by: Sudeep Mohanty <[email protected]>
Force-pushed from 190fc03 to 40a991d
#define queueLOCK( pxQueue ) \
    do { \
        vTaskPreemptionDisable( NULL ); \
        prvLockQueue( ( pxQueue ) ); \
        portGET_SPINLOCK( portGET_CORE_ID(), &( pxQueue->xTaskSpinlock ) ); \
    } while( 0 )
#define queueLOCK( pxQueue ) \
    do { \
        vTaskPreemptionDisable( NULL ); \
        prvLockQueue( ( pxQueue ) ); \
        /* Potential gap here since no lock is acquired */ \
        portGET_SPINLOCK( portGET_CORE_ID(), &( pxQueue->xTaskSpinlock ) ); \
    } while( 0 )

- There is a potential gap between prvLockQueue and portGET_SPINLOCK. Suggest we swap the order.
- Suggest we consider adding the following macros in task.h and reusing them in the implementation.

#define taskDATA_GROUP_LOCK( pxTaskSpinlock ) \
    do { \
        vTaskPreemptionDisable( NULL ); \
        portGET_SPINLOCK( portGET_CORE_ID(), ( portSPINLOCK_TYPE * ) ( pxTaskSpinlock ) ); \
    } while( 0 )

#define taskDATA_GROUP_UNLOCK( pxTaskSpinlock ) \
    do { \
        portRELEASE_SPINLOCK( portGET_CORE_ID(), ( portSPINLOCK_TYPE * ) ( pxTaskSpinlock ) ); \
        vTaskPreemptionEnable( NULL ); \
    } while( 0 )
Suggested change:

#define queueLOCK( pxQueue ) \
    do { \
        taskDATA_GROUP_LOCK( &( pxQueue->xTaskSpinlock ) ); \
        prvLockQueue( ( pxQueue ) ); \
    } while( 0 )
taskENTER_CRITICAL();
{
    xInheritanceOccurred = xTaskPriorityInherit( pxQueue->u.xSemaphore.xMutexHolder );
}
taskEXIT_CRITICAL();
We try to enter the kernel data group critical section only in tasks.c, so we should move the taskENTER/EXIT_CRITICAL() calls here into tasks.c.
We should also remove the "#if ( portUSING_GRANULAR_LOCKS == 1 )" check in the xTaskPriorityInherit() implementation:
BaseType_t xTaskPriorityInherit( TaskHandle_t const pxMutexHolder )
{
    TCB_t * const pxMutexHolderTCB = pxMutexHolder;
    BaseType_t xReturn = pdFALSE;

    traceENTER_xTaskPriorityInherit( pxMutexHolder );

    #if ( portUSING_GRANULAR_LOCKS == 1 )
        kernelENTER_CRITICAL();
    #endif

    ...

    #if ( portUSING_GRANULAR_LOCKS == 1 )
        kernelEXIT_CRITICAL();
    #endif

    traceRETURN_xTaskPriorityInherit( xReturn );

    return xReturn;
}
#if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )
    static void prvLockStreamBufferForTasks( StreamBuffer_t * const pxStreamBuffer )
    {
        /* Disable preemption so that the current task cannot be preempted by another task */
        vTaskPreemptionDisable( NULL );

        /* Keep holding xTaskSpinlock after unlocking the data group to prevent tasks
         * on other cores from accessing the stream buffer while it is suspended. */
        portGET_SPINLOCK( portGET_CORE_ID(), &( pxStreamBuffer->xTaskSpinlock ) );
    }
#endif /* #if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) ) */
/*-----------------------------------------------------------*/

#if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )
    static void prvUnlockStreamBufferForTasks( StreamBuffer_t * const pxStreamBuffer )
    {
        /* Release the previously held task spinlock */
        portRELEASE_SPINLOCK( portGET_CORE_ID(), &( pxStreamBuffer->xTaskSpinlock ) );

        /* Re-enable preemption */
        vTaskPreemptionEnable( NULL );
    }
#endif /* #if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) ) */
/*-----------------------------------------------------------*/
Similar suggestion: use taskDATA_GROUP_LOCK()/taskDATA_GROUP_UNLOCK() for these functions.
vTaskPreemptionEnable( NULL );

/* Yield if preemption was re-enabled */
if( xTaskUnlockCanYield() == pdTRUE )
xTaskUnlockCanYield() should be called inside a critical section. We may create another internal API that returns the already-yielded information when preemption is enabled.
For example:

BaseType_t xCurrentTaskPreemptionEnable( void );
#if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )
    #define portTICK_TYPE_ENTER_CRITICAL()    portENTER_CRITICAL_DATA_GROUP( &xTaskSpinlock, &xISRSpinlock )
    #define portTICK_TYPE_EXIT_CRITICAL()     portEXIT_CRITICAL_DATA_GROUP( &xTaskSpinlock, &xISRSpinlock )
#else /* #if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) ) */
Suggested change:

#if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )
    #define portTICK_TYPE_ENTER_CRITICAL()    vTaskEnterCritical()
    #define portTICK_TYPE_EXIT_CRITICAL()     vTaskExitCritical()
#else /* #if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) ) */

We should still use the kernel data group critical section here.
#if ( portUSING_GRANULAR_LOCKS == 1 )
    #define taskDATA_GROUP_ENTER_CRITICAL( pxTaskSpinlock, pxISRSpinlock ) \
    do { \
        /* Disable preemption to avoid task state changes during the critical section. */ \
        vTaskPreemptionDisable( NULL ); \
        { \
            const BaseType_t xCoreID = ( BaseType_t ) portGET_CORE_ID(); \
            /* Task spinlock is always taken first */ \
            portGET_SPINLOCK( xCoreID, ( portSPINLOCK_TYPE * ) pxTaskSpinlock ); \
            /* Disable interrupts */ \
            portDISABLE_INTERRUPTS(); \
            /* Take the ISR spinlock next */ \
            portGET_SPINLOCK( xCoreID, ( portSPINLOCK_TYPE * ) pxISRSpinlock ); \
            /* Increment the critical nesting count */ \
            portINCREMENT_CRITICAL_NESTING_COUNT( xCoreID ); \
        } \
    } while( 0 )
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
Suggested change:

#if ( portUSING_GRANULAR_LOCKS == 1 )
    #define taskDATA_GROUP_ENTER_CRITICAL( pxTaskSpinlock, pxISRSpinlock ) \
    do { \
        /* Disable preemption to avoid task state changes during the critical section. */ \
        vTaskPreemptionDisable( NULL ); \
        { \
            const BaseType_t xCoreID = ( BaseType_t ) portGET_CORE_ID(); \
            /* Task spinlock is always taken first */ \
            portGET_SPINLOCK( xCoreID, ( portSPINLOCK_TYPE * ) ( pxTaskSpinlock ) ); \
            /* Disable interrupts */ \
            portDISABLE_INTERRUPTS(); \
            /* Take the ISR spinlock next */ \
            portGET_SPINLOCK( xCoreID, ( portSPINLOCK_TYPE * ) ( pxISRSpinlock ) ); \
            /* Increment the critical nesting count */ \
            portINCREMENT_CRITICAL_NESTING_COUNT( xCoreID ); \
        } \
    } while( 0 )
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
#if ( portUSING_GRANULAR_LOCKS == 1 )
    #define taskDATA_GROUP_EXIT_CRITICAL( pxTaskSpinlock, pxISRSpinlock ) \
    do { \
        const BaseType_t xCoreID = ( BaseType_t ) portGET_CORE_ID(); \
        configASSERT( portGET_CRITICAL_NESTING_COUNT( xCoreID ) > 0U ); \
        /* Release the ISR spinlock */ \
        portRELEASE_SPINLOCK( xCoreID, ( portSPINLOCK_TYPE * ) pxISRSpinlock ); \
        /* Release the task spinlock */ \
        portRELEASE_SPINLOCK( xCoreID, ( portSPINLOCK_TYPE * ) pxTaskSpinlock ); \
        /* Decrement the critical nesting count */ \
        portDECREMENT_CRITICAL_NESTING_COUNT( xCoreID ); \
        /* Enable interrupts only if the critical nesting count is 0 */ \
        if( portGET_CRITICAL_NESTING_COUNT( xCoreID ) == 0 ) \
        { \
            portENABLE_INTERRUPTS(); \
        } \
        else \
        { \
            mtCOVERAGE_TEST_MARKER(); \
        } \
        /* Re-enable preemption */ \
        vTaskPreemptionEnable( NULL ); \
    } while( 0 )
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
Suggested change:

#if ( portUSING_GRANULAR_LOCKS == 1 )
    #define taskDATA_GROUP_EXIT_CRITICAL( pxTaskSpinlock, pxISRSpinlock ) \
    do { \
        const BaseType_t xCoreID = ( BaseType_t ) portGET_CORE_ID(); \
        configASSERT( portGET_CRITICAL_NESTING_COUNT( xCoreID ) > 0U ); \
        /* Release the ISR spinlock */ \
        portRELEASE_SPINLOCK( xCoreID, ( portSPINLOCK_TYPE * ) ( pxISRSpinlock ) ); \
        /* Release the task spinlock */ \
        portRELEASE_SPINLOCK( xCoreID, ( portSPINLOCK_TYPE * ) ( pxTaskSpinlock ) ); \
        /* Decrement the critical nesting count */ \
        portDECREMENT_CRITICAL_NESTING_COUNT( xCoreID ); \
        /* Enable interrupts only if the critical nesting count is 0 */ \
        if( portGET_CRITICAL_NESTING_COUNT( xCoreID ) == 0 ) \
        { \
            portENABLE_INTERRUPTS(); \
        } \
        else \
        { \
            mtCOVERAGE_TEST_MARKER(); \
        } \
        /* Re-enable preemption */ \
        vTaskPreemptionEnable( NULL ); \
    } while( 0 )
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
#if ( portUSING_GRANULAR_LOCKS == 1 )
    #define taskDATA_GROUP_EXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, pxISRSpinlock ) \
    do { \
        const BaseType_t xCoreID = ( BaseType_t ) portGET_CORE_ID(); \
        configASSERT( portGET_CRITICAL_NESTING_COUNT( xCoreID ) > 0U ); \
        /* Decrement the critical nesting count */ \
        portDECREMENT_CRITICAL_NESTING_COUNT( xCoreID ); \
        /* Release the ISR spinlock */ \
        portRELEASE_SPINLOCK( xCoreID, ( portSPINLOCK_TYPE * ) pxISRSpinlock ); \
        if( portGET_CRITICAL_NESTING_COUNT( xCoreID ) == 0 ) \
        { \
            portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus ); \
        } \
    } while( 0 )
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
Suggested change:

#if ( portUSING_GRANULAR_LOCKS == 1 )
    #define taskDATA_GROUP_EXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, pxISRSpinlock ) \
    do { \
        const BaseType_t xCoreID = ( BaseType_t ) portGET_CORE_ID(); \
        configASSERT( portGET_CRITICAL_NESTING_COUNT( xCoreID ) > 0U ); \
        /* Decrement the critical nesting count */ \
        portDECREMENT_CRITICAL_NESTING_COUNT( xCoreID ); \
        /* Release the ISR spinlock */ \
        portRELEASE_SPINLOCK( xCoreID, ( portSPINLOCK_TYPE * ) ( pxISRSpinlock ) ); \
        if( portGET_CRITICAL_NESTING_COUNT( xCoreID ) == 0 ) \
        { \
            portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus ); \
        } \
    } while( 0 )
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
#if ( portUSING_GRANULAR_LOCKS == 1 )
    #define sbENTER_CRITICAL( pxStreamBuffer )                                      taskDATA_GROUP_ENTER_CRITICAL( &pxStreamBuffer->xTaskSpinlock, &pxStreamBuffer->xISRSpinlock )
    #define sbENTER_CRITICAL_FROM_ISR( pxStreamBuffer, puxSavedInterruptStatus )    taskDATA_GROUP_ENTER_CRITICAL_FROM_ISR( &pxStreamBuffer->xISRSpinlock, puxSavedInterruptStatus )
    #define sbEXIT_CRITICAL( pxStreamBuffer )                                       taskDATA_GROUP_EXIT_CRITICAL( &pxStreamBuffer->xTaskSpinlock, &pxStreamBuffer->xISRSpinlock )
    #define sbEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, pxStreamBuffer )      taskDATA_GROUP_EXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, &pxStreamBuffer->xISRSpinlock )
#else /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
    #define sbENTER_CRITICAL( pxStreamBuffer )                                      taskENTER_CRITICAL();
    #define sbENTER_CRITICAL_FROM_ISR( pxStreamBuffer, puxSavedInterruptStatus )    do { *( puxSavedInterruptStatus ) = taskENTER_CRITICAL_FROM_ISR(); } while( 0 )
    #define sbEXIT_CRITICAL( pxStreamBuffer )                                       taskEXIT_CRITICAL();
    #define sbEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, pxStreamBuffer )      taskEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus );
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
Suggested change:

#if ( portUSING_GRANULAR_LOCKS == 1 )
    #define sbENTER_CRITICAL( pxStreamBuffer )                                      taskDATA_GROUP_ENTER_CRITICAL( &( ( pxStreamBuffer )->xTaskSpinlock ), &( ( pxStreamBuffer )->xISRSpinlock ) )
    #define sbENTER_CRITICAL_FROM_ISR( pxStreamBuffer, puxSavedInterruptStatus )    taskDATA_GROUP_ENTER_CRITICAL_FROM_ISR( &( ( pxStreamBuffer )->xISRSpinlock ), puxSavedInterruptStatus )
    #define sbEXIT_CRITICAL( pxStreamBuffer )                                       taskDATA_GROUP_EXIT_CRITICAL( &( ( pxStreamBuffer )->xTaskSpinlock ), &( ( pxStreamBuffer )->xISRSpinlock ) )
    #define sbEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, pxStreamBuffer )      taskDATA_GROUP_EXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, &( ( pxStreamBuffer )->xISRSpinlock ) )
#else /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
    #define sbENTER_CRITICAL( pxStreamBuffer )                                      taskENTER_CRITICAL();
    #define sbENTER_CRITICAL_FROM_ISR( pxStreamBuffer, puxSavedInterruptStatus )    do { *( puxSavedInterruptStatus ) = taskENTER_CRITICAL_FROM_ISR(); } while( 0 )
    #define sbEXIT_CRITICAL( pxStreamBuffer )                                       taskEXIT_CRITICAL();
    #define sbEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, pxStreamBuffer )      taskEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus );
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
@@ -2644,7 +2818,7 @@ static void prvInitialiseNewTask( TaskFunction_t pxTaskCode,

     traceENTER_uxTaskPriorityGet( xTask );

-    portBASE_TYPE_ENTER_CRITICAL();
+    kernelENTER_CRITICAL();
portBASE_TYPE_ENTER_CRITICAL() and portBASE_TYPE_EXIT_CRITICAL() are still required for the single-core port.
#if ( configUSE_TASK_PREEMPTION_DISABLE == 1 )
    && ( pxCurrentTCBs[ xCoreID ]->uxPreemptionDisable == 0U )
#endif
Probably we can consider removing the preemption-disable check here.
- If xTaskResumeAll() is called with preemption disabled but xYieldPendings is set, the caller cannot yield itself anyway, since preemption is disabled. When preemption is re-enabled, xYieldPendings will be handled automatically.
- The return value of xTaskResumeAll() should not be used when preemption is disabled in the caller function, as this would cause the scheduler to fail to schedule the correct tasks. A similar situation is calling FreeRTOS APIs in a critical section, which should be handled carefully.
{
    mtCOVERAGE_TEST_MARKER();
}
#if ( configUSE_TASK_PREEMPTION_DISABLE == 1 ) |
configUSE_TASK_PREEMPTION_DISABLE should not be used with a single core, so we probably don't need the following logic.
kernelGET_TASK_LOCK( xCoreID );
kernelGET_ISR_LOCK( xCoreID );
The only difference between the granular-lock and non-granular-lock versions is checking the critical nesting count before acquiring the spinlocks. Can we consider using a compile-time macro, with a comment for explanation, instead of cloning a new function here?
BaseType_t xYieldCurrentTask;

/* Get the xYieldPending stats inside the critical section. */
xYieldCurrentTask = xTaskUnlockCanYield();
Acquiring the yield-pending state should be performed inside the critical section; we should move this before releasing the spinlocks.
And also, thank you for the logic inside xTaskUnlockCanYield(). I used the following test case to identify that there would be a redundant context switch without checking whether the scheduler is suspended.
void test_task_scheduler_suspend_critical_section( void )
{
    TaskHandle_t xMainTaskHandle;

    /* Create high priority main task. */
    xTaskCreate( vSmpTestTask, "SMP Task", configMINIMAL_STACK_SIZE, NULL, 3, &xMainTaskHandle );

    /* Start the scheduler. */
    vTaskStartScheduler();

    vTaskSuspendAll();
    vTaskEnterCritical();

    /* Request the task to yield. */
    vTaskYieldWithinAPI();

    /* The scheduler is still suspended. Leaving the critical section should not
     * request the core for a context switch. */
    vTaskExitCritical();

    /* A context switch should happen when the scheduler is resumed. */
    ( void ) xTaskResumeAll();
}
#if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )

    BaseType_t xTaskUnlockCanYield( void )
Currently, this function is used three times in the implementation:
- event_groups.c - we can consider adding an internal function xCurrentTaskEnablePreemption() that returns the already-yielded information.
- tasks.c exit critical section - we can move the implementation here into the exit-critical-section function.
This function should be called inside a critical section. Suggest we consider removing this function.
This PR adds support for granular locking to the FreeRTOS kernel.
Description
Granular locking introduces the concept of localized locks per kernel data group for SMP configurations. This method is an optional replacement for the existing kernel locks and is controlled by a new port layer configuration, viz., portUSING_GRANULAR_LOCKS. More details about the approach can be found here.
Test Steps
The implementation has been tested on Espressif SoC targets, viz., the ESP32, using the ESP-IDF framework.
1. Testing on an esp32 target
- To test on an esp32, set up the ESP-IDF environment on your local machine. The steps to follow are listed in the Getting Started Guide.
- Navigate to components/freertos/test_apps/freertos, where all the test cases are located. The performance tests are in the performance subfolder at the same location.
- Set the target with idf.py set-target esp32.
- Enable the required menuconfig options. To do this, enter the command idf.py menuconfig -> Component config -> FreeRTOS -> Kernel -> Run the Amazon SMP FreeRTOS kernel instead (FEATURE UNDER DEVELOPMENT). Save the configuration and exit the menuconfig.
- Build, flash, and monitor with idf.py build flash monitor.
.TODO
Checklist:
Related Issue
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.